Patent Abstract:
The present invention provides a precise positioning system for a three-dimensional position of a tumor by multi-image fusion. The system comprises: an image preprocessing module, configured to preprocess acquired multi-modal images; a registration and fusion module, configured to register and align the images based on mutual information and fuse the aligned images; and a segmentation and reconstruction module, configured to perform segmentation and three-dimensional reconstruction on the fused image to determine the position of a tumor. The present invention uses medical image fusion technology to properly fuse multi-modal images that provide different medical information, and performs segmentation and three-dimensional reconstruction on the fused image to precisely position a tumor.
Publication number: NL2025814A
Application number: NL2025814
Filing date: 2020-06-11
Publication date: 2021-05-17
Inventors: Yuan Shuanghu; Li Wei; Liu Ning; Wei Yuchun; Dong Leilei; Wang Suzhen; Yu Jinming; Li Li; Li Xiaoxiao; Liu Wenju
Applicants: Shandong Cancer Hospital And Inst; Univ Shandong
IPC main class:
Patent Description:

PRECISE POSITIONING SYSTEM FOR THREE-DIMENSIONAL POSITION OF TUMOR BY MULTI-IMAGE FUSION Field of the Invention The present invention belongs to the field of image processing, and particularly relates to a precise positioning system for a three-dimensional position of a tumor by multi-image fusion.
Background of the Invention The statements in this section merely provide background information related to the present invention and do not necessarily constitute prior art.
With the development of medical imaging technology, modern medical treatment is closely related to medical imaging information.
The diagnosis and assessment of most diseases require evidence from medical images.
Different medical images provide different information of related organs: CT and MR provide structural information such as anatomical structures of organs with high spatial resolution, and SPECT and PET provide functional information such as blood perfusion of organs.
If the information of structural images and functional images is organically fused and comprehensively processed to obtain new information, new insights will be brought to clinical diagnosis and treatment.
Multi-image fusion imaging can improve the sensitivity and specificity of tumor diagnosis, and also provide more information for positioning of biopsy, thereby reducing the deficiencies of morphological imaging.
However, how to register and fuse multi-modal images to implement precise three-dimensional positioning of a tumor is crucial.
Summary of the Invention In order to solve the above problems, the present invention proposes a precise positioning system for a three-dimensional position of a tumor by multi-image fusion.
The present invention uses medical image fusion technology to properly fuse multi-modal images that provide different medical information, and performs segmentation and three-dimensional reconstruction on the fused image to precisely position a tumor.
According to some embodiments, the present invention adopts the following technical solution: A precise positioning system for a three-dimensional position of a tumor by multi-image fusion, including: an image preprocessing module, configured to preprocess acquired multi-modal images; a registration and fusion module, configured to register and align the images based on mutual information and fuse the aligned images; and a segmentation and reconstruction module, configured to perform segmentation and three-dimensional reconstruction on the fused image to determine the position of a tumor.
The present invention uses medical image fusion technology to properly fuse multi-modal images that provide different medical information, and the fused image can provide a more intuitive, more comprehensive, and clearer image basis. Segmentation and three-dimensional reconstruction are performed on the fused image to precisely position a tumor.
As a further limitation, the image preprocessing module preprocesses the images by median filtering, wherein for an image, a rectangular sliding window is generated with each pixel in the image as the center, then all pixels in this window are sorted according to the gray values from small to large, a median of the sorted sequence is calculated, and this median is used to replace the pixel value of a center point in the window; a one-dimensional sequence f_1, f_2, ..., f_n is assumed, the length of the window is m, and the median filtering on the sequence extracts m numbers f_{i-v}, ..., f_{i-1}, f_i, f_{i+1}, ..., f_{i+v} from the input sequence, wherein i is the central position of the window and v = (m - 1)/2; the m points are then sorted according to their numerical values, and the number in the center is used as the output of the filtering.
As a further limitation, the image preprocessing module preprocesses the images by image edge enhancement based on wavelet transformation, wherein wavelet transform decomposition is performed using a Mallat algorithm on the images de-noised by median filtering, the scales of decomposition are three layers, each layer of wavelet decomposition decomposes an image to be decomposed into a plurality of sub-band images and obtains a wavelet coefficient of each scale, the wavelet coefficients smaller than a set value are regarded as noise, the noise is filtered out by setting an appropriate threshold, and different enhancement coefficients are selected to enhance detail components of the image within different frequency ranges, thereby improving image quality and enhancing layering and visual effects.
As a further limitation, the registration and fusion module is configured to calculate the mutual information using one of two images as a reference image and the other as a floating image by: first performing coordinate transformation and registering the transformed pixels of the floating image F to the reference image R, wherein the coordinates of the pixels of the floating image F after the coordinate transformation are not necessarily integers; and obtaining gray values of corresponding points on the reference image R by interpolation, rotating each pixel of the floating image F and then registering the pixels to the reference image R, calculating a joint histogram and edge probability distribution through the transformed pixel point pairs, thus obtaining the mutual information; wherein when the mutual information is maximum, the two images are geometrically aligned.
As a further limitation, the registration and fusion module is configured to fuse the images by using wavelet pyramid fusion, which performs certain layers of orthogonal wavelet transformation on the reference image and the floating image that participate in the fusion to obtain four sub-images representing low-frequency information, horizontal information, vertical information and diagonal information, the low-frequency information is processed in the same way on each layer, and so on; the low-frequency portion of the last layer is fused by taking the maximum of the coefficients; and high-frequency wavelet coefficients of transformation on each layer in the other three directions are hierarchically and linearly weighted and fused.
As a further limitation, the segmentation and reconstruction module is configured to perform segmentation by Snake model image segmentation and positioning based on a competitive neural network, and after initial segmentation of the images, the results are used to initialize the state of neurons in a master network and the state is dynamically evolved until convergence.
Compared with the prior art, the beneficial effects of the present invention are: The present invention uses medical image fusion technology to properly fuse multi-modal images that provide structural and functional medical information, which can provide a more comprehensive basis for judgment, and makes up for the shortcomings of single-mode images in providing one-sided information.
The present invention uses a combination of median filtering and wavelet transformation edge enhancement in image preprocessing, which effectively removes noise, retains signals with better smoothness, can enhance image edges and has better visual effects.
The present invention uses a registration method based on mutual information and a wavelet pyramid fusion method in registration and fusion, which can be used for registration of images in almost any different modalities, further enhances image edge information, and avoids dark images and new interference caused by inverse wavelet transformation.
The present invention uses a Snake model image segmentation and positioning method based on a competitive neural network in segmentation and positioning, wherein the competitive neural network is used for initial segmentation of images, thereby realizing an automatic segmentation technology, solving the sensitivity of a Snake model to the initial contour, and well solving the shortcoming of unsatisfactory detection effects on concave contours or convex contours with high curvatures.
Brief Description of the Drawings The accompanying drawings constituting a part of the present application are used for providing a further understanding of the present application, and the schematic embodiments of the present application and the description thereof are used for interpreting the present application, rather than constituting improper limitations to the present application.
Fig. 1 is a flowchart of precise tumor positioning by multi-image fusion according to the present invention; Fig. 2 is a flowchart of Snake model image segmentation and positioning based on a competitive neural network according to the present invention; Fig. 3 is a flowchart of registration based on mutual information according to the present invention.
Detailed Description of the Embodiments The present invention will be further illustrated below in conjunction with the accompanying drawings and embodiments.
It should be pointed out that the following detailed descriptions are all exemplary and aim to further illustrate the present application. Unless otherwise specified, all technological and scientific terms used in the descriptions have the same meanings generally understood by those of ordinary skill in the art of the present application.
It should be noted that the terms used herein are merely for describing specific embodiments, but are not intended to limit exemplary embodiments according to the present application. As used herein, unless otherwise explicitly pointed out by the context, the singular form is also intended to include the plural form. In addition, it should also be understood that when the terms “include” and/or “comprise” are used in the specification, they indicate features, steps, operations, devices, components and/or their combination.
In the present invention, the terms such as “upper”, “lower”, “left”, “right”, “front”, “rear”, “vertical”, “horizontal”, “side”, and “bottom” indicate the orientations or positional relationships based on the orientations or positional relationships shown in the drawings, are only relationship terms determined for the convenience of describing the structural relationships of various components or elements of the present invention, but do not specify any component or element in the present invention, and cannot be understood as limitations to the present invention.
In the present invention, the terms such as “fixed”, “coupled” and “connected” should be generally understood, for example, the “connected” may be fixedly connected, integrally connected, detachably connected, directly connected, or indirectly connected by a medium.
For those skilled in the art, the specific meanings of the above terms in the present invention may be determined according to specific circumstances, and cannot be understood as limitations to the present invention.
A precise positioning system for a three-dimensional position of a tumor by multi-image fusion includes: an image preprocessing module, configured to preprocess acquired multi-modal images; a registration and fusion module, configured to register and align the images based on mutual information and fuse the aligned images; and a segmentation and reconstruction module, configured to perform segmentation and three-dimensional reconstruction on the fused image to determine the position of a tumor.
The image preprocessing module preprocesses the images by median filtering, wherein for an image, a rectangular sliding window is generated with each pixel in the image as the center, then all pixels in this window are sorted according to the gray values from small to big, a median of the sorted sequence is calculated, and this median is used to replace the pixel value of a center point in the window.
The image preprocessing module preprocesses the images by image edge enhancement based on wavelet transformation, wherein wavelet transform decomposition is performed using a Mallat algorithm on the images de-noised by median filtering, the scales of decomposition are three layers, each layer of wavelet decomposition decomposes an image to be decomposed into a plurality of sub-band images and obtains a wavelet coefficient of each scale, the wavelet coefficients smaller than a set value are regarded as noise, the noise is filtered out by setting an appropriate threshold, and different enhancement coefficients are selected to enhance detail components of the image within different frequency ranges, thereby improving image quality and enhancing layering and visual effects.
The registration and fusion module is configured to calculate the mutual information using one of two images as a reference image and the other as a floating image by: first performing coordinate transformation and registering the transformed pixels of the floating image F to the reference image R, wherein the coordinates of the pixels of the floating image F after the coordinate transformation are not necessarily integers; and obtaining gray values of corresponding points on the reference image R by interpolation, rotating each pixel of the floating image F and then registering the pixels to the reference image R, calculating a joint histogram and edge probability distribution through the transformed pixel point pairs, thus obtaining the mutual information; wherein when the mutual information is maximum, the two images are geometrically aligned.
The registration and fusion module is configured to fuse the images by using wavelet pyramid fusion, which performs certain layers of orthogonal wavelet transformation on the reference image and the floating image that participate in the fusion to obtain four sub-images representing low-frequency information, horizontal information, vertical information and diagonal information, the low-frequency information is processed in the same way on each layer, and so on; the low-frequency portion of the last layer is fused by taking the maximum of the coefficients; and high-frequency wavelet coefficients of transformation on each layer in the other three directions are hierarchically and linearly weighted and fused.
The segmentation and reconstruction module is configured to perform segmentation by Snake model image segmentation and positioning based on a competitive neural network, and after initial segmentation of the images, the results are used to initialize the state of neurons in a master network and the state is dynamically evolved until convergence.
As shown in Fig. 1, the working method of the above system mainly includes three aspects: image preprocessing, registration and fusion, segmentation and positioning.
The image preprocessing mainly uses median filtering and wavelet transform edge enhancement technologies; the registration and fusion mainly use a registration method based on mutual information and a wavelet pyramid fusion method; and the segmentation and positioning mainly use a Snake model image segmentation and positioning method based on a competitive neural network.
The images are preprocessed by median filtering, wherein for an image, a rectangular sliding window (the size of the window is generally odd) is generated with each pixel in the image as the center, then all pixels in this window are sorted according to the gray values from small to large, a median of the sorted sequence is calculated, and this median is used to replace the pixel value of a center point in the window. A one-dimensional sequence f_1, f_2, ..., f_n is assumed, and the length of the window is m. The median filtering on the sequence extracts m numbers f_{i-v}, ..., f_{i-1}, f_i, f_{i+1}, ..., f_{i+v} from the input sequence, wherein i is the central position of the window and v = (m - 1)/2; the m points are then sorted according to their numerical values, and the number in the center is used as the output y_i of the filtering, expressed mathematically as:

y_i = Med{ f_{i-v}, ..., f_i, ..., f_{i+v} },  i ∈ Z,  v = (m - 1)/2.

The median filtering of two-dimensional data can be expressed as:

y_{ij} = Med{ x_{i+k, j+l} },  (k, l) in the filtering window.

Further, the image edge enhancement technology based on wavelet transformation performs wavelet transform decomposition using a Mallat algorithm on the images de-noised by median filtering. The decomposition uses three scales (layers); each layer of wavelet decomposition decomposes the image to be decomposed into four sub-band images: LL (horizontal low frequency, vertical low frequency), LH (horizontal low frequency, vertical high frequency), HL (horizontal high frequency, vertical low frequency), and HH (horizontal high frequency, vertical high frequency), and a wavelet coefficient of each scale is obtained.
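The sliding-window median filter described above can be sketched as follows (an illustrative NumPy version; the function name and the boundary handling, which simply leaves border samples unchanged, are our assumptions):

```python
import numpy as np

def median_filter_1d(f, m):
    """Median-filter a 1-D sequence with an odd window length m."""
    assert m % 2 == 1, "window length m is generally odd"
    v = (m - 1) // 2          # half-window, v = (m - 1)/2 as in the text
    f = np.asarray(f, dtype=float)
    out = f.copy()            # border samples are left unchanged in this sketch
    for i in range(v, len(f) - v):
        # sort the m samples f[i-v] .. f[i+v] and keep the central one
        out[i] = np.median(f[i - v:i + v + 1])
    return out
```

A single impulse such as the 9 in the sequence [1, 9, 3, 4, 5] is replaced by the median of its neighborhood, which is how the filter suppresses isolated noise while preserving edges.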
The smaller wavelet coefficients are regarded as noise, and the noise is filtered out by setting an appropriate threshold. The threshold T_j^i is obtained according to the variance of each sub-band image, where j is the current number of decomposition layers and i is 1, 2, 3, respectively representing the HH, HL and LH sub-band images. The wavelet coefficient of each layer is transformed as follows to obtain an estimated value:

ŵ_j^i(x, y) = w_j^i(x, y) - T_j^i,  if w_j^i(x, y) ≥ T_j^i;
ŵ_j^i(x, y) = 0,                    if |w_j^i(x, y)| < T_j^i;
ŵ_j^i(x, y) = w_j^i(x, y) + T_j^i,  if w_j^i(x, y) ≤ -T_j^i.

Detail information in an image is usually contained in the high-frequency components, so after the noise is removed, different enhancement coefficients are applied to enhance detail components of the image within different frequency ranges, thereby improving the image quality and enhancing the layering and visual effects. An enhancement coefficient K_j is set to enhance the wavelet coefficients after threshold processing:

w'_j(x, y) = K_j × w_j(x, y),  K_j = √j × K,

where j is the current number of decomposition layers and K is an empirical weight. As for the registration process based on mutual information: mutual information is a similarity measure of the statistical correlation between two random variables. If two images are geometrically aligned, the mutual information of their corresponding voxel pairs is maximal. This method requires neither an assumption about the relationship between image intensities nor segmentation or any preprocessing of the images, is not sensitive to missing data, and can be used for registration of images in almost any different modalities. The general process is shown in Fig. 3.
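The coefficient thresholding and enhancement rules above can be sketched as follows (a minimal NumPy illustration; reading the piecewise rule as soft thresholding, and the function names, are our assumptions):

```python
import numpy as np

def soft_threshold(w, t):
    """Piecewise rule: shrink coefficients by t, zero out those with |w| < t."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def enhance(w, j, k):
    """Scale layer-j detail coefficients by K_j = sqrt(j) * K."""
    return np.sqrt(j) * k * w
```

Applied per sub-band, `soft_threshold` suppresses small (noise-like) coefficients and `enhance` amplifies the surviving detail coefficients more strongly at deeper layers.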
Two images are registered, one as a reference image R and the other as a floating image F. To calculate the mutual information, coordinate transformation is first performed. The coordinate transformation transforms pixels of the floating image F and then registers the transformed pixels to the reference image R; rigid transformation is used here. The coordinates of the pixels of the floating image F after the coordinate transformation are not necessarily integers, so the gray values of corresponding points on the reference image R need to be obtained by linear interpolation. Each pixel of the floating image F is rotated and then registered to the reference image R, and a joint histogram h(F, R) is calculated through the transformed pixel point pairs.
The mutual information formula is:

I(F, R) = Σ_{f,r} p_FR(f, r) × log( p_FR(f, r) / (p_F(f) × p_R(r)) ),

where p_FR(f, r) is the joint probability distribution, which can be obtained by normalizing the joint gray histogram h(F, R) of the two images.
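The mutual-information computation can be sketched from a joint gray histogram as follows (our illustrative NumPy code, not the patented implementation; the marginals are obtained by summing the normalized joint histogram):

```python
import numpy as np

def mutual_information(joint_hist):
    """I(F, R) from a joint gray histogram of the floating and reference images."""
    p = joint_hist / joint_hist.sum()      # joint probability p_FR(f, r)
    pf = p.sum(axis=1, keepdims=True)      # marginal p_F(f)
    pr = p.sum(axis=0, keepdims=True)      # marginal p_R(r)
    nz = p > 0                             # skip zero entries (0 * log 0 = 0)
    return float(np.sum(p[nz] * np.log(p[nz] / (pf * pr)[nz])))
```

For perfectly aligned identical images the joint histogram is diagonal and the mutual information is maximal (log of the number of gray levels for a uniform image), while statistically independent gray values give zero.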
The edge (marginal) probability distributions can be obtained directly from the joint probability distribution:

p_F(f) = Σ_{r ∈ R} p_FR(f, r),  p_R(r) = Σ_{f ∈ F} p_FR(f, r).

The wavelet pyramid fusion method performs certain layers (here three layers) of orthogonal wavelet transformation on the images F and R that participate in the fusion to obtain four sub-images representing low-frequency information, horizontal information, vertical information and diagonal information; the low-frequency information is processed in the same way on each layer, and so on.
On the one hand, the low-frequency portion of the last layer is fused by taking the maximum of the coefficients; on the other hand, the high-frequency wavelet coefficients of the transformation on each layer in the other three directions are hierarchically and linearly weighted and fused, with the weighting function

w(x, y) = r_F × w_F(x, y) + r_R × w_R(x, y),  r_l = K_l × (1 - l/N),  l = 1, 2, ..., N.
N is the number of layers for wavelet transformation, and K_l is an enhancement coefficient.
This algorithm uses all the high-frequency information, thus avoiding dark images caused by inverse wavelet transformation, also avoiding undesirable effects of unnecessary interference information due to irregular changes in high-frequency coefficients after wavelet transformation, and enhancing image edge information.
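The two fusion rules above can be sketched as follows (our sketch; fusing the low-frequency band by magnitude maximum and the detail bands with a layer-dependent weight r and complement 1 - r is an assumed reading of the weighting formula):

```python
import numpy as np

def fuse_lowfreq(wa, wb):
    """Last-layer approximation band: keep the coefficient of larger magnitude."""
    return np.where(np.abs(wa) >= np.abs(wb), wa, wb)

def fuse_highfreq(wf, wr, layer, n_layers, k=1.0):
    """Hierarchical linear weighting of the detail bands on each layer."""
    r = k * (1.0 - layer / n_layers)   # assumed form of r_l = K_l * (1 - l/N)
    return r * wf + (1.0 - r) * wr
```

Because every detail coefficient contributes through the linear weight rather than being discarded, all high-frequency information is carried into the inverse transform, matching the stated goal of avoiding dark fused images.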
As shown in Fig. 2, in the Snake model image segmentation and positioning method based on a competitive neural network, the competitive neural network is a master-slave network, and the slave network is a Kohonen network group. After initial segmentation of the images, the results are used to initialize the state of neurons in the master network, and the state is dynamically evolved until the neurons are converged to an attractor of the master network.
If an L×L image f(i, j) (i, j = 1, 2, ..., L) has M different gray levels, a network having L×L×M neurons is established by setting M neurons for each pixel. The m-th neuron at pixel (i, j) is N_{ijm}, and its active value is v_{ijm}, representing the possibility that pixel (i, j) has gray level m. Obviously, 0 ≤ v_{ijm} ≤ 1 and Σ_{m=1}^{M} v_{ijm} = 1. The intensity of the interconnection from neuron N_{ijm} to N_{kln} is T_{ijm;kln}, and T_{ijm;kln} = T_{kln;ijm} is assumed. Each neuron in the network receives the inputs of itself and the other neurons. The function

A_{ijm} = Σ_{k=1}^{L} Σ_{l=1}^{L} Σ_{n=1}^{M} T_{ijm;kln} × v_{kln}

of a network state vector v represents the total effect of the active values of the other neurons on N_{ijm}, where the state vector is v = (v_{111}, v_{112}, ..., v_{11M}, v_{121}, ..., v_{LLM}), and the energy function of the network at state vector v is

E(v) = -(1/2) Σ_{i=1}^{L} Σ_{j=1}^{L} Σ_{m=1}^{M} Σ_{k=1}^{L} Σ_{l=1}^{L} Σ_{n=1}^{M} T_{ijm;kln} × v_{ijm} × v_{kln}.

The subsequent Snake model is a process of minimizing the energy function.
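The energy that the dynamic evolution minimizes is a quadratic form over the flattened neuron states, which can be sketched as follows (our illustrative code; flattening the L×L×M neurons into one vector and the function name are assumptions):

```python
import numpy as np

def network_energy(t_mat, v):
    """E(v) = -1/2 * sum over all pairs of T[ijm,kln] * v[ijm] * v[kln]."""
    v = np.ravel(v)                     # flatten the L*L*M neuron states
    return -0.5 * float(v @ t_mat @ v)  # quadratic form with symmetric T
```

Because T is symmetric, each asynchronous update that raises a neuron's net input A_{ijm} lowers E(v), so the evolution converges to an attractor (a local energy minimum) that serves as the initial contour for the Snake model.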
Described above are merely preferred embodiments of the present application, and the present application is not limited thereto. Various modifications and variations may be made to the present application for those skilled in the art. Any modification, equivalent substitution, improvement or the like made within the spirit and principle of the present application shall fall into the protection scope of the present application.
Although the specific embodiments of the present invention are described above in combination with the accompanying drawing, the protection scope of the present invention is not limited thereto. It should be understood by those skilled in the art that various modifications or variations could be made by those skilled in the art based on the technical solutions of the present invention without any creative effort, and these modifications or variations shall fall into the protection scope of the present invention.
Claims:
Claims (9)
[1]
A precise positioning system for a three-dimensional position of a tumor by multi-image fusion, comprising: an image preprocessing module, configured to preprocess acquired multi-modal images; a registration and fusion module, configured to register and align the images based on mutual information and to fuse the aligned images; and a segmentation and reconstruction module, configured to perform segmentation and three-dimensional reconstruction on the fused image to determine the position of a tumor.
[2]
The precise positioning system for a three-dimensional position of a tumor by multi-image fusion according to claim 1, wherein the image preprocessing module preprocesses the images by median filtering, wherein for an image a rectangular sliding window is generated with each pixel in the image as the center, then all pixels in this window are sorted according to the gray values from smallest to largest, a median of the sorted sequence is calculated, and this median is used to replace the pixel value of a center point in the window.
[3]
The precise positioning system for a three-dimensional position of a tumor by multi-image fusion according to claim 2, wherein a one-dimensional sequence f_1, ..., f_n is assumed, the length of the window is m, and the median filtering on the sequence extracts m numbers f_{i-v}, ..., f_{i-1}, f_i, f_{i+1}, ..., f_{i+v} from the input sequence, where i is the central position of the window and v = (m - 1)/2; the m points are then sorted according to their numerical values, and the number in the center is used as the filtering output.
[4]
The precise positioning system for a three-dimensional position of a tumor by multi-image fusion according to claim 1, wherein the image preprocessing module preprocesses the images by image edge enhancement based on wavelet transformation, wherein wavelet transform decomposition using a Mallat algorithm is performed on the images de-noised by median filtering, the decomposition scales comprise multiple layers, each layer of the wavelet decomposition decomposes an image to be decomposed into a plurality of sub-band images and obtains a wavelet coefficient of each scale, the wavelet coefficients smaller than a set value are regarded as noise, the noise is filtered out by setting an appropriate threshold, and different enhancement coefficients are selected to enhance detail components of the image within different frequency ranges, thereby improving image quality and enhancing layering and visual effects.
[5]
The precise positioning system for a three-dimensional tumor location by means of the multi-image fusion of claim 4, wherein the decomposition scales comprise three layers.
[6]
The precise positioning system for a three-dimensional position of a tumor by multi-image fusion according to claim 1, wherein the registration and fusion module is configured to calculate the mutual information using one of two images as a reference image and the other as a floating image by: first performing a coordinate transformation and registering the transformed pixels of the floating image F to the reference image R, wherein the coordinates of the pixels of the floating image F after the coordinate transformation are not necessarily integers; and obtaining gray values of corresponding points on the reference image R by interpolation, rotating each pixel of the floating image F and then registering the pixels to the reference image R, and calculating a joint histogram and edge probability distribution through the transformed pixel point pairs, thus obtaining the mutual information; wherein, when the mutual information is maximal, the two images are geometrically aligned.
[7]
The precise positioning system for a three-dimensional position of a tumor by multi-image fusion according to claim 1, wherein the registration and fusion module is configured to fuse the images by wavelet pyramid fusion, which performs certain layers of orthogonal wavelet transformation on the reference image and the floating image that participate in the fusion to obtain four sub-images representing low-frequency information, horizontal information, vertical information and diagonal information, wherein the low-frequency information is processed in the same way on each layer, and so on.
[8]
The precise positioning system for a three-dimensional position of a tumor by multi-image fusion according to claim 7, wherein the low-frequency portion of the last layer is fused by taking the maximum of the coefficients; and the high-frequency wavelet coefficients of the transformation on each layer in the other three directions are hierarchically and linearly weighted and fused.
[9]
The precise positioning system for a three-dimensional position of a tumor by multi-image fusion according to claim 1, wherein the segmentation and reconstruction module is configured to perform segmentation by Snake model image segmentation and positioning based on a competitive neural network, wherein after initial segmentation of the images, the results are used to initialize the state of neurons in a master network and the neuron state is dynamically evolved until convergence.
Similar technologies:
Publication number | Publication date | Patent title
NL2025814B1|2021-12-14|Precise positioning system for three-dimensional position of tumor by multi-image fusion
Roth et al.2014|A new 2.5 D representation for lymph node detection using random sets of deep convolutional neural network observations
Hou et al.2019|Brain CT and MRI medical image fusion using convolutional neural networks and a dual-channel spiking cortical model
Karnan et al.2010|Improved implementation of brain MRI image segmentation using ant colony system
DE102007046582A1|2008-05-08|System and method for segmenting chambers of a heart in a three-dimensional image
Singh et al.2019|Multimodal medical image sensor fusion model using sparse K-SVD dictionary learning in nonsubsampled shearlet domain
Miao et al.2020|Local segmentation of images using an improved fuzzy C-means clustering algorithm based on self-adaptive dictionary learning
Karthik et al.2015|A comprehensive framework for classification of brain tumour images using SVM and curvelet transform
Zhong et al.2019|Boosting‐based cascaded convolutional neural networks for the segmentation of CT organs‐at‐risk in nasopharyngeal carcinoma
Gan et al.2015|BM3D-based ultrasound image denoising via brushlet thresholding
Lahoud et al.2019|Zero-learning fast medical image fusion
Zhao et al.2010|Study of image segmentation algorithm based on textural features and neural network
CN103366348A|2013-10-23|Processing method and processing device for restraining bone image in X-ray image
Azam et al.2021|Multimodal Medical Image Registration and Fusion for Quality Enhancement
Singh et al.2021|An unsupervised orthogonal rotation invariant moment based fuzzy C-means approach for the segmentation of brain magnetic resonance images
Liu et al.2020|Multimodal medical image fusion using rolling guidance filter with CNN and nuclear norm minimization
Cui et al.2017|Application of neural network based on sift local feature extraction in medical image classification
Kabir2020|Early stage brain tumor detection on MRI image using a hybrid technique
Valverde et al.2016|Multiple sclerosis lesion detection and segmentation using a convolutional neural network of 3D patches
Javed et al.2017|Weighted fusion of MRI and PET images based on fractal dimension
Heller et al.2018|Computer aided diagnosis of skin lesions from morphological features
Namburete et al.2018|Multi-channel groupwise registration to construct an ultrasound-specific fetal brain atlas
Jasionowska et al.2019|Wavelet convolution neural network for classification of spiculated findings in mammograms
Li et al.2008|Parallel multimodal medical image fusion in 3D conformal radiotherapy treatment planning
Papkov et al.2021|Noise2Stack: Improving image restoration by learning from volumetric data
Patent family:
Publication number | Publication date
NL2025814B1|2021-12-14|
CN110660063A|2020-01-07|
Cited references:
Publication number | Filing date | Publication date | Applicant | Patent title
US20080292194A1|2005-04-27|2008-11-27|Mark Schmidt|Method and System for Automatic Detection and Segmentation of Tumors and Associated Edema in Magnetic Resonance Images|
CN107610162A|2017-08-04|2018-01-19|浙江工业大学|A kind of three-dimensional multimode state medical image autoegistration method based on mutual information and image segmentation|
CN109035160A|2018-06-29|2018-12-18|Harbin University of Commerce|The fusion method of medical image and the image detecting method learnt based on fusion medical image|
CN111228655A|2020-01-14|2020-06-05|Monitoring method and device based on virtual intelligent medical platform and storage medium|
CN111477304A|2020-04-03|2020-07-31|Tumor irradiation imaging combination method for fusing PET image and MRI image|
Legal status:
Priority:
Application number | Filing date | Patent title
CN201910888197.XA|CN110660063A|2019-09-19|2019-09-19|Multi-image fused tumor three-dimensional position accurate positioning system|